Device-Circuit-Architecture Co-Exploration for Computing-in-Memory Neural Accelerators

Authors

Abstract

Co-exploration of neural architectures and hardware design is promising due to its capability to simultaneously optimize network accuracy and hardware efficiency. However, state-of-the-art neural architecture search (NAS) algorithms for the co-exploration are dedicated to the conventional von Neumann computing architecture, whose performance is heavily limited by the well-known memory wall. In this article, we bring computing-in-memory, which can easily transcend the memory wall, to interplay with the neural architecture search, aiming to find the most efficient neural architectures with high network accuracy and maximized hardware efficiency. Such a novel combination opens opportunities to boost performance, but also brings a number of challenges: the optimization space spans across multiple design layers, from device type and circuit topology to neural architecture, and the presence of device variation may drastically degrade neural network performance. To address these challenges, we propose a cross-layer exploration framework, namely NACIM, which jointly explores the device, circuit, and architecture design spaces and takes device variation into consideration to find the most robust neural architectures, coupled with the most efficient hardware design. Experimental results demonstrate that NACIM can find a robust neural network with only a 0.45 percent accuracy loss in the presence of device variation, compared with a 76.44 percent loss from state-of-the-art NAS without considering variation; in addition, NACIM achieves an energy efficiency of up to 16.3 TOPs/W, 3.17x higher than that of state-of-the-art NAS.
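The abstract's central idea, scoring candidate designs by their accuracy *after* device-variation noise is injected into the stored weights rather than by nominal accuracy, can be sketched as below. This is an illustrative toy only, not the NACIM implementation: the linear classifier, the multiplicative Gaussian noise model, and all function names are assumptions made for this example.

```python
import random

def predict(weights, x):
    # Dot product followed by a sign threshold: a toy stand-in
    # for a neural network forward pass.
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s >= 0 else 0

def accuracy_under_variation(weights, data, sigma, trials=50, rng=None):
    # Monte Carlo estimate: each trial perturbs every stored weight with
    # multiplicative Gaussian noise, a simple model of device variation
    # in a computing-in-memory crossbar.
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(trials):
        noisy = [w * (1.0 + rng.gauss(0.0, sigma)) for w in weights]
        correct = sum(1 for x, y in data if predict(noisy, x) == y)
        total += correct / len(data)
    return total / trials

def co_explore(candidates, data, sigma=0.1):
    # Pick the candidate that remains accurate after noise injection,
    # i.e. the variation-robust design, not the nominally best one.
    return max(candidates,
               key=lambda w: accuracy_under_variation(w, data, sigma))
```

With this scoring, a candidate whose correct outputs rely on fine cancellation between large weights (e.g. `[10.0, -9.9]`) loses badly to a large-margin candidate (e.g. `[1.0, 1.0]`), even though both are perfectly accurate in the noise-free setting, mirroring the 0.45 percent vs. 76.44 percent gap the abstract reports.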


Similar Articles

Calculation of the phantom scatter factor for linear accelerator devices

Introduction: In the last few decades, many Monte Carlo codes have been introduced for medical applications. This is important because, through simulation, we are fully independent of the spectral scattering factor that we are not able to measure in real terms. The effect of the radiation energy, the dimension of the radiation field, and the sensitive volume of the ion chamber on t...

DRC2: Dynamically Reconfigurable Computing Circuit based on memory architecture

This paper presents a novel energy-efficient and Dynamically Reconfigurable Computing Circuit (DRC2) concept based on memory architecture for data-intensive (imaging, ...) and secure (cryptography, ...) applications. The proposed computing circuit is based on a 10-Transistor (10T) 3-Port SRAM bitcell array driven by a peripheral circuitry enabling all basic operations that can be traditionally ...

Neural Networks for Device and Circuit Modelling

The standard backpropagation theory for static feedforward neural networks can be generalized to include continuous dynamic effects like delays and phase shifts. The resulting non-quasistatic feedforward neural models can represent a wide class of nonlinear and dynamic systems, including arbitrary nonlinear static systems and arbitrary quasi-static systems as well as arbitrary lumped linear dyn...

Flexible On-chip Memory Architecture for DCNN Accelerators

Recent studies show that as the depth of Convolutional Neural Networks (CNNs) increases for higher performance in different machine learning tasks, a major bottleneck in deep CNN (DCNN) processing is the traffic between the accelerator and off-chip memory. However, current state-of-the-art accelerators cannot effectively reduce off-chip feature map traffic due to the limited ...

Novel synaptic memory device for neuromorphic computing

This report discusses the electrical characteristics of two-terminal synaptic memory devices capable of demonstrating an analog change in conductance in response to the varying amplitude and pulse-width of the applied signal. The devices are based on Mn doped HfO₂ material. The mechanism behind reconfiguration was studied and a unified model is presented to explain the underlying device physics...


Journal

Journal title: IEEE Transactions on Computers

Year: 2021

ISSN: 1557-9956, 2326-3814, 0018-9340

DOI: https://doi.org/10.1109/tc.2020.2991575